Greg Detre
Thursday, April 10, 2003
I agree
that the various descriptions given of the unconscious are problematic and
possibly inconsistent. I'm a little unhappy with the eliminativist view of
consciousness (major adherents include: Dennett[1],
Minsky, Sloman[2] etc.).
Fortunately though, I think we can more or less ignore that whole discussion
about whether there are such things as 'qualia' (i.e. mental states that have
the distinctive quality that there is something it is like to experience them),
if we talk instead (as you sensibly do) about 'awareness', i.e. whether or not
certain information is available/accessible for higher-level processing. This
kind of corresponds to Ned Block's[3]
distinction between:
phenomenal consciousness = the raw 'feel' of experience (qualia)
access consciousness = the accessibility of experience to verbal report and use in intentional control
When we're
talking about computational theories, we're really only interested in this
second (and more tractable) type. As I understand it, your solution is to add a
second dimension to Model 6, where the awareness somehow ebbs and flows
according to the problem, and where processes at any level can be either
conscious or unconscious (that is to say, we may or may not be aware of them).
I feel that you could probably simplify this idea just by sticking with
Minsky's Model 6 one-dimensional hierarchy, and saying that each process gets
some sort of tag or value as it's spawned, determining whether its contents are
accessible to high-level probing and introspection.
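A toy sketch of what I mean by this tagging idea (all the names and the mechanism here are my own hypothetical illustration — Minsky's Model 6 specifies nothing of the sort):

```python
# Toy sketch: processes in a one-dimensional hierarchy, each tagged at
# spawn time with whether its contents are open to introspection.
# Names and levels are hypothetical illustrations, not Minsky's terms.

class Process:
    def __init__(self, name, level, introspectable):
        self.name = name
        self.level = level                    # position in the hierarchy
        self.introspectable = introspectable  # tag fixed when spawned

def introspect(processes):
    """Return only those processes whose contents are accessible to
    higher-level probing -- i.e. the ones we are 'aware' of."""
    return [p.name for p in processes if p.introspectable]

processes = [
    Process("retinal-preprocessing", level=1, introspectable=False),
    Process("breathing-monitor", level=2, introspectable=True),
    Process("deliberate-planning", level=6, introspectable=True),
]

print(introspect(processes))  # ['breathing-monitor', 'deliberate-planning']
```

The point of the sketch is just that awareness need not be an extra dimension: a per-process property set at spawn time would do the same work within the one-dimensional hierarchy.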
After all,
we know that there's some information that's absolutely not available to me, no
matter how hard I introspect (e.g. being able to see the blood vessels between
my retina and the outside world), some that I can access if I try (e.g. the
pattern of my breathing, or the distant hum of traffic), and some that I can
only process when my attention is squarely focused upon it (I can't think of a
good example of this - maybe programming). We usually talk in terms of
different 'quantities' (for want of a better term) of awareness, as though
there's a continuum, but given the diversity of computational processes
underlying these different abilities, it's possible that the continuum masks a
much richer underlying hierarchy of qualitatively different types of
awareness. So, one upshot of looking at the kinds of Minskian agents that we're
aware of would be that we might be able to tease apart different types of
awareness. This means that we should be looking at which kinds of computational
processes (i.e. agents/resources/cascades) are accessible, and which aren't.
I'm not actually convinced that a taxonomy along these lines is really
possible, but it's implicitly being attempted when researchers examine
blindsight, lesions, the effects of stimulating neurons during brain
operations, etc.
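The three grades of access described above (never available, available with effort, available only under focused attention) could be sketched as a small ordered set rather than a single boolean — again a hypothetical illustration, not a claim about how the brain does it:

```python
# Sketch: graded introspective access instead of a single on/off tag.
# The level names and the 'available' logic are hypothetical.
from enum import IntEnum

class Access(IntEnum):
    NEVER = 0       # e.g. processing of retinal blood vessels
    EFFORTFUL = 1   # e.g. breathing pattern, distant hum of traffic
    FOCAL_ONLY = 2  # only while attention is squarely focused on it

def available(access, trying=False, attention_focused=False):
    """Is this information accessible to introspection right now?"""
    if access is Access.NEVER:
        return False
    if access is Access.EFFORTFUL:
        return trying              # accessible if I make the effort
    return attention_focused       # Access.FOCAL_ONLY

print(available(Access.NEVER, trying=True))                 # False
print(available(Access.EFFORTFUL, trying=True))             # True
print(available(Access.FOCAL_ONLY))                         # False
print(available(Access.FOCAL_ONLY, attention_focused=True)) # True
```

Even this is probably too coarse: if the apparent continuum really masks qualitatively different types of awareness, a flat ordered scale like this would be the wrong data structure altogether.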
I've never
got much further in my thinking than this, but I suspect that Minsky might
suggest that we look to computer science for inspiration when trying to describe
different types of computational processes. We could conceivably try and
distinguish stuff like:
This is
just a list off the top of my head. For starters, it might not be at all
feasible right now, because we don't have the foggiest clue how (or even really
if) the brain actually implements the kinds of symbolic agents that
Minsky speculates about. And of course, we should also take into account more
traditional factors when looking for the neural correlate(s) of consciousness,
e.g.
Anyway,
this has digressed further than I intended. In summary, I agree that Minsky's
position on 'unconscious processes' is unclear. But I'm not convinced that
there's a simple binary distinction between processes/agents that we're aware
of and those that we aren't; indeed, the continuum may admit hierarchical
categorisation of types of awareness, and in the distant future this question
could prove to be an entirely empirical one.